Course Materials

Tree Dormancy (Chapter 3)

Exercises on tree dormancy

  1. Put yourself in the place of a breeder who wants to calculate the temperature requirements of a newly released cultivar. Which method will you use to calculate the chilling and forcing periods? Please justify your answer.

    As temperature requirements are cultivar-dependent, data on dormancy dates first has to be collected for a new cultivar with an experimental approach. If such data has already been collected, e.g. by a research institute, the chilling and forcing periods can be delineated from weather data with statistical methods such as partial least squares (PLS) regression (Luedeling, Kunz, and Blanke 2013). With this method, chill accumulation during the chilling phase can be quantified with the Dynamic Model, and heat accumulation during the forcing phase with growing degree days (GDD). Knowing during which specific periods chilling and heat matter most for a cultivar, a suitable one can be selected for the region. Site-specific knowledge is also essential to pick future-proof cultivars that are already adapted to global warming.

  2. Which are the two advantages of the BBCH scale compared with earlier scales?

    • It standardizes the phenological stages and makes them easily recognizable under all field conditions and across species, providing an easily comparable classification metric.
    • The two-digit code provides two orders of scale: the first digit represents the macro stage (principal growth stage) and describes a time span, while the second digit specifies the micro stage (secondary growth stage), i.e. a precise step within it.
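The two-digit logic can be sketched in a few lines of R (the helper name bbch_stage is my own, not from the course materials):

```r
# Split a two-digit BBCH code into its principal (macro) and
# secondary (micro) growth stage digits.
bbch_stage <- function(code) {
  stopifnot(code >= 0, code <= 99)
  list(principal = code %/% 10,  # first digit: principal growth stage
       secondary = code %% 10)   # second digit: precise step within that stage
}

bbch_stage(65)  # principal stage 6 (flowering), secondary stage 5
```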

  3. Classify the following phenological stages of sweet cherry according to the BBCH scale (figure: BBCH_states_cherry).
    Left image: 56 - Flower pedicel elongating (in the topmost bud, sepals can be seen as a white tip)
    Middle image: 61 to 65 - Flowering (all flowers to be seen are open, but to determine the correct percentage a broader view is needed)
    Right image: 89 - Fruit ripe for harvesting
    (See Fadón, Herrero, and Rodrigo (2015) for reference)

Climate change and impact projection (Chapter 4)

Exercises on climate change

  1. List the main drivers of climate change at the decade to century scale, and briefly explain the mechanism through which the currently most important driver affects our climate.

    • Sun: Variation in radiation over time.
    • Aerosols: Various molecules or particles suspended in the air. They can block radiation and thus massively affect local climate (e.g. smog). Both natural and anthropogenic sources exist.
    • Clouds: Block radiation in both directions, so incoming radiation is partially kept out while radiation reflected from the earth gets trapped.
    • Ozone: Blocks harmful UV-B radiation but also acts as a greenhouse gas. Destroying the natural ozone shield heightens the risk of skin cancer and causes environmental damage.
    • Surface albedo: Surfaces reflect or absorb radiation depending on their material.
    • Greenhouse gases (GHGs): Absorb radiation at certain wavelengths (long-wave radiation emitted by the earth's surface). Major GHGs are water vapour, carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O). By far the biggest driver of recent climate change.

  2. Explain briefly what is special about temperature dynamics of the recent decades, and why we have good reasons to be concerned.

    Over the last two millennia up to a few decades ago, the global climate was, on average, quite cool. In recent decades, however, the average temperature has kept rising above every previously known record. The problem with increasing temperatures is not only the direct effects, but positive feedbacks such as melting ice caps or thawing permafrost: with rising temperature the surface albedo changes, so even more radiation is absorbed, and frozen methane is released, which amplifies the warming even further. While comparably warm periods have occurred in earth history, they developed over millions of years, giving evolution enough time to adapt and species time to move across the planet. What usually happens on such a large temporal scale is now happening within decades, with no time for such adaptation.
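The amplifying effect of a positive feedback can be illustrated with the standard feedback formula (a toy calculation, not from the course): a direct warming dT0 combined with a feedback factor f (0 <= f < 1) converges to dT0 / (1 - f).

```r
# Sum of the geometric feedback series: dT0 * (1 + f + f^2 + ...) = dT0 / (1 - f)
feedback_warming <- function(dT0, f) {
  stopifnot(f >= 0, f < 1)
  dT0 / (1 - f)
}

feedback_warming(1, 0.5)  # 1 degree of direct warming becomes 2 degrees
```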

  3. What does the abbreviation ‘RCP’ stand for, how are RCPs defined, and what is their role in projecting future climates?

    The Representative Concentration Pathways (RCPs) are possible climate scenarios simulating the impact of different GHG emission pathways from 2000 to 2100. The number after RCP (e.g. RCP6.0) stands for the additional expected radiative forcing (in W m-2) by 2100. They are large-scale simulations from which other studies derive their data to run global climate models and, via model-based downscaling (refining the coarse global output to finer resolution), regional climate models.

  4. Briefly describe the 4 climate impact projection methods described in the fourth video.

    • Statistical models: Establish a relationship between climatic parameters and ecological data, which is then used to project future distributions. Because different algorithms produce a wide range of results, using an ensemble has become common. There are further limitations: the statistical relationship need not remain valid, e.g. when factors that were not included in the original model change. The approach also assumes that species distributions are in equilibrium with the climate, which may change or may already not be the case, since humans plant what they need, not what is naturally best suited.
    • Process-based models: Represent all major system processes with equations based on the best scientific knowledge on each subject. They are very precise but highly specific, which requires extensive assumptions and/or extensive parameterization. Even seemingly small processes are often not sufficiently understood, and such models do not even exist for large and complex systems. These models also often have linear assumptions built in, which might not be valid in the future.
    • Climate analogue models: The climate projected for a given location might already exist today somewhere else, so sites with similar climate conditions can serve as analogues. While the general idea sounds promising, many aspects other than the climate itself differ between sites. If differences are found, there is also the question of their origin: are they due to climate or to other factors? The work required to balance out all these factors to make the sites comparable can render this approach infeasible.


      An extensive overview of these three methods (figures: climate_impact_projections_methods_workflow, climate_impact_projections_methods_pro_con).
      Source: Luedeling et al. (2014)

    • Complex systems: The approach to modelling complex systems is a holistic one, incorporating any relevant information from all kinds of sources. Because it handles uncertainties well, it allows rough estimates as model inputs and can thus cover all relevant aspects without extensive studies. At its core are causal models that combine the information to calculate an outcome distribution. This approach is used in Decision Science and can be seen in this project from last semester.

Winter chill projections (Chapter 5)

Exercises on past chill projections

  1. Sketch out three data access and processing challenges that had to be overcome in order to produce chill projections with state-of-the-art methodology.

    • Getting regional high-resolution temperature data (Oman): Calculating chilling hours requires high-resolution temperature data. But as the temperature sensors had only been set up a few years earlier, high-resolution data covering several years was missing. This problem was overcome thanks to a nearby airport, which generously donated its data.
    • Processing global data into high-resolution regional data (Tunisia): To obtain suitable data for a regional study, one has to take the rasterized output of the GCMs and downscale it using a regional climate model. Since an ensemble study is usually required, this multiplies the processing and data storage requirements. It becomes even more problematic if no automated data-processing workflow is set up.
    • Processing data without a programming language, usable models or an API (up to California): For a long time, much of the data was analysed manually in Excel or fed by hand into models. This was time-consuming and cumbersome, and the different programs in use all required different input formats. This was later solved with a programming language.
    • Processing multiple scenarios increases processing and storage requirements exponentially (Kenya)
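A back-of-the-envelope calculation (with made-up but plausible ensemble sizes) shows how quickly the workload multiplies:

```r
# Number of simulated weather-years in a typical ensemble study:
# every GCM is combined with every RCP and every time horizon.
n_gcms      <- 15   # climate models in the ensemble (illustrative)
n_rcps      <- 2    # e.g. RCP4.5 and RCP8.5
n_horizons  <- 2    # e.g. 2050 and 2085
n_sim_years <- 100  # simulated years per scenario

n_gcms * n_rcps * n_horizons * n_sim_years  # weather-years to generate and process
```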

  2. Outline, in your understanding, the basic steps that are necessary to make such projections.


Manual chill analysis (Chapter 6)

Exercises on basic chill modelling

  1. Write a basic function that calculates warm hours (>25°C)
  2. Apply this function to the Winters_hours_gaps dataset
  3. Extend this function, so that it can take start and end dates as inputs and sums up warm hours between these dates

To evaluate the performance of different methods, all these exercises were combined and extended in the following script.

library(chillR)
library(microbenchmark)

# load the chillR example dataset into the environment
Winters_hours_gaps <- Winters_hours_gaps

# it seems like ifelse() is actually quite slow. The dplyr version if_else() is faster, but not as versatile
warm_hours_ifelse <- function(num_vec){
  warm_hours_vec <- ifelse(num_vec > 25, TRUE, FALSE)
  return(warm_hours_vec)
  }

# Efficient way to calculate a boolean vector (ignores NAs, but still faster even with an NA check)
warm_hours_rep <- function(num_vec){
  warm_hours_vec <- rep(FALSE, length(num_vec))
  warm_hours_vec[num_vec > 25] <- TRUE
  #warm_hours_vec[is.na(num_vec)] <- NA
  return(warm_hours_vec)
}

microbenchmark(
  warm_hours_rep = {warm_hours_rep(Winters_hours_gaps$Temp)},
  warm_hours_ifelse = {warm_hours_ifelse(Winters_hours_gaps$Temp)},
  times = 10000
  )
## Unit: microseconds
##               expr    min     lq     mean median     uq      max neval cld
##     warm_hours_rep 20.268 21.079 24.81729 21.380 21.791 2937.037 10000  a 
##  warm_hours_ifelse 68.067 71.233 91.68825 72.206 73.798 5118.497 10000   b
# Input a single number, which is then decomposed to fit the current table schema
# Advantage: no hassle with POSIXct
filter_temp_on_num_dates <- function(df, start_date, end_date){
  start_year<-trunc(start_date/1000000)
  start_month<-trunc((start_date-start_year*1000000)/10000)
  start_day<-trunc((start_date-start_year*1000000-start_month*10000) / 100)
  start_hour<-start_date-start_year*1000000-start_month*10000-start_day*100
  end_year<-trunc(end_date/1000000)
  end_month<-trunc((end_date-end_year*1000000)/10000)
  end_day<-trunc((end_date-end_year*1000000-end_month*10000) / 100)
  end_hour<-end_date-end_year*1000000-end_month*10000-end_day*100
  start_row <- which((df$Year == start_year) & 
                       (df$Month == start_month) & 
                       (df$Day == start_day) & 
                       (df$Hour == start_hour))
  end_row <- which((df$Year == end_year) & 
                     (df$Month == end_month) & 
                     (df$Day == end_day) & 
                     (df$Hour == end_hour))
  return(sum(warm_hours_rep(df[start_row:end_row,]$Temp)))
}

# Input POSIXct dates, which are then transformed to fit the current table schema
# Comment: No real performance advantage, but enables standardized POSIXct input,
# which may help for the transition
filter_temp_on_dates <- function(df, start_date, end_date){
  start_date <- as.POSIXlt(start_date)
  end_date <- as.POSIXlt(end_date)
  start_year = 1900 + start_date$year
  start_month = start_date$mon + 1 # $mon ranges from 0 to 11
  start_day = start_date$mday
  start_hour = start_date$hour
  end_year = 1900 + end_date$year
  end_month = end_date$mon +1
  end_day = end_date$mday
  end_hour = end_date$hour
  start_row <- which((df$Year == start_year) & 
                       (df$Month == start_month) & 
                       (df$Day == start_day) & 
                       (df$Hour == start_hour))
  end_row <- which((df$Year == end_year) & 
                     (df$Month == end_month) & 
                     (df$Day == end_day) & 
                     (df$Hour == end_hour))
  return(sum(warm_hours_rep(df[start_row:end_row,]$Temp)))
}

# Filter with POSIXct dates on a POSIXct date column
# Comment: Easy to understand, fast to program and 
# if exact start or end date/hour isn't available it still works
filter_temp_on_dates_with_dates <- function(df, start_date, end_date){
  return(sum(warm_hours_rep(df[df$Date >= start_date & df$Date <= end_date, "Temp"])))
}

# Preparation
start_date = as.POSIXct("2008030400", tz = "UTC", format = "%Y%m%d%H")
end_date = as.POSIXct("2008063023", tz = "UTC", format = "%Y%m%d%H")
start_date_num = 2008030400
end_date_num = 2008063023
Winters_hours_gaps_real_dates <- within(Winters_hours_gaps, 
                                        Date <- as.POSIXct(
                                          sprintf("%d-%02d-%02d %02d:00", Year, Month, Day, Hour))
                                        )[,c("Date","Temp_gaps", "Temp")]

res_benchmark <- microbenchmark(
  using_numbers = {
    filter_temp_on_num_dates(Winters_hours_gaps, start_date_num, end_date_num)
  },
  using_dates = {
    filter_temp_on_dates(Winters_hours_gaps, start_date, end_date)
  },
  using_dates_on_dates = {
    filter_temp_on_dates_with_dates(Winters_hours_gaps_real_dates, start_date, end_date)
  },
  times = 100000
)
print(res_benchmark, signif = 3)
## Unit: microseconds
##                  expr   min    lq  mean median    uq    max neval cld
##         using_numbers 220.0 243.0 291.0  246.0 250.0 128000 1e+05  b 
##           using_dates 245.0 267.0 328.0  271.0 275.0 123000 1e+05   c
##  using_dates_on_dates  61.4  65.9  70.2   68.6  70.8   3030 1e+05 a

Chill Models (Chapter 7)

Exercises on chill models

  1. Run the chilling() function on the Winters_hours_gaps dataset
  2. Create your own temperature-weighting chill model using the step_model() function
  3. Run this model on the Winters_hours_gaps dataset using the tempResponse() function.
require(chillR)
?Chilling_Hours
Chilling_Hours
## function (HourTemp, summ = TRUE) 
## {
##     CH_range <- which(HourTemp <= 7.2 & HourTemp >= 0)
##     CH_weights <- rep(0, length(HourTemp))
##     CH_weights[CH_range] <- 1
##     if (summ == TRUE) 
##         return(cumsum(CH_weights))
##     else return(CH_weights)
## }
## <bytecode: 0x561e39695e48>
## <environment: namespace:chillR>
#Chilling_Hours(Winters_hours_gaps$Temp, summ = FALSE)
plot(Utah_Model(Winters_hours_gaps$Temp, summ = TRUE))

own_step_model <- function (HourTemp, summ = TRUE){
  return(step_model(HourTemp, 
                    df = data.frame(lower = c(-1000, 1.5, 2.5, 9, 12.5, 16, 18), 
                                    upper = c(1.5, 2.5, 9, 12.5, 16, 18, 1000), 
                                    weight = c(0, 0.5, 1, 0.5, 0, -0.5, -1)),
                    summ = summ))
}
own_step_model(Winters_hours_gaps$Temp, summ = TRUE)[1:100]
##   [1]  0.0 -0.5 -1.5 -2.5 -3.5 -4.5 -5.5 -6.0 -6.0 -6.0 -5.5 -5.0 -4.5 -3.5 -2.5
##  [16] -1.5 -0.5  0.0  1.0  2.0  3.0  4.0  4.5  4.5  4.5  3.5  2.5  1.5  0.5 -0.5
##  [31] -1.5 -2.5 -3.0 -3.0 -2.5 -2.0 -1.5 -1.0  0.0  0.5  1.0  1.5  1.5  2.0  2.5
##  [46]  3.0  3.5  3.5  3.5  3.0  2.0  1.0  0.0 -1.0 -2.0 -3.0 -3.5 -3.5 -3.0 -2.5
##  [61] -1.5 -0.5  0.5  1.5  2.5  3.5  4.5  5.5  6.5  7.5  8.5  9.0  9.0  8.5  8.0
##  [76]  7.5  7.0  6.5  6.0  5.5  5.5  6.0  6.5  7.0  8.0  9.0 10.0 11.0 12.0 13.0
##  [91] 14.0 15.0 16.0 17.0 17.5 17.5 17.5 17.0 16.0 15.0
#Dynamic_Model(Winters_hours_gaps$Temp, summ = TRUE)
#chilling(make_JDay(Winters_hours_gaps))
#chilling(make_JDay(Winters_hours_gaps), Start_JDay = 100, End_JDay = 200)
tempResponse(make_JDay(Winters_hours_gaps),
             Start_JDay = 100,
             End_JDay = 200,
             models = list(Chilling_Hours = Chilling_Hours,
                           Utah_Chill_Units = Utah_Model,
                           Chill_Portions = Dynamic_Model,
                           GDH = GDH,
                           own_step_model = own_step_model))
##      Season End_year Season_days Data_days Perc_complete Chilling_Hours
## 1 2007/2008     2008         101       101           100             50
##   Utah_Chill_Units Chill_Portions      GDH own_step_model
## 1          -1332.5       2.255033 31284.32          -1325

Making hourly temperatures (Chapter 8)

Exercises on hourly temperatures

  1. Choose a location of interest, find out its latitude and produce plots of daily sunrise, sunset and day length
  2. Produce an hourly data set, based on idealized daily curves, for the KA_weather data set (included in chillR)
  3. Produce empirical temperature curve parameters for the Winters_hours_gaps data set, and use them to predict hourly values from daily temperatures (this is very similar to the example above, but please make sure you understand what’s going on)
require(chillR)
require(reshape2)
require(ggplot2)

sun_path <- data.frame(JDay = 1:365,daylength(61.8, 1:365))
sun_path_long <- melt(sun_path, id = c("JDay"))
## Warning in melt(sun_path, id = c("JDay")): The melt generic in data.table has
## been passed a data.frame and will attempt to redirect to the relevant reshape2
## method; please note that reshape2 is deprecated, and this redirection is now
## deprecated as well. To continue using melt methods from reshape2 while both
## libraries are attached, e.g. melt.list, you can prepend the namespace like
## reshape2::melt(sun_path). In the next version, this warning will become an
## error.
sun_path_plot <- ggplot(sun_path_long, aes(x = JDay, y = value))+
  geom_line(aes(colour = variable))+
  xlab("Julian day")+
  ylab("Hours")+
  theme_bw(base_size = 15)
sun_path_plot

str(KA_weather)
## 'data.frame':    4534 obs. of  5 variables:
##  $ Year : int  1998 1998 1998 1998 1998 1998 1998 1998 1998 1998 ...
##  $ Month: int  1 1 1 1 1 1 1 1 1 1 ...
##  $ Day  : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ Tmax : num  8.2 9.1 10.4 8.4 7.7 8.1 12 11.2 13.9 14.5 ...
##  $ Tmin : num  5.1 5 3.3 4.5 4.5 4.4 6.9 8.6 8.5 3.6 ...
hour_temps_CKA_ideal <- stack_hourly_temps(KA_weather, 51)


str(Winters_hours_gaps)
## 'data.frame':    6074 obs. of  6 variables:
##  $ Year     : int  2008 2008 2008 2008 2008 2008 2008 2008 2008 2008 ...
##  $ Month    : int  3 3 3 3 3 3 3 3 3 3 ...
##  $ Day      : int  3 3 3 3 3 3 3 3 3 3 ...
##  $ Hour     : int  10 11 12 13 14 15 16 17 18 19 ...
##  $ Temp_gaps: num  15.1 17.2 18.7 18.7 18.8 ...
##  $ Temp     : num  15.1 17.2 18.7 18.7 18.8 ...
temp_coeffs <- Empirical_daily_temperature_curve(Winters_hours_gaps)
Winter_daily <- make_all_day_table(Winters_hours_gaps, input_timestep="hour")
hour_temps_winter_emp <- Empirical_hourly_temperatures(Winter_daily, temp_coeffs)


# # Self exercise: Interpolating data for a Fjäll in Sweden (historical hourly data available)
# fjaell_a_hourly <- read.csv2(file = "data/smhi-opendata_1_112540_20211031_164311.csv",header = TRUE, dec = ".", sep = ";", quote = "", skip = 10)
# fjaell_a_hourly$date_time <- as.POSIXct(paste(fjaell_a_hourly$Datum, fjaell_a_hourly$"Tid..UTC."), tz = "UTC")
# names(fjaell_a_hourly)[names(fjaell_a_hourly)=="Lufttemperatur"] <- "Temp"
# 
# ggplot(fjaell_a_hourly, aes(x = date_time, y = Temp))+
#   geom_line()+
#   xlab("timeline")+
#   ylab("air temperature (°C)")+
#   theme_bw(base_size = 14)
# 
# 
# require(data.table)
# library(data.table)
# test <- c(as.POSIXct("1985-05-01 06:00:00", tz = "UTC"), as.POSIXct("1985-05-01 12:00:00", tz = "UTC"))
# year(test)
# 
# fjaell_a_hourly <- data.frame(fjaell_a_hourly, 
#                               Year = year(fjaell_a_hourly$date_time), 
#                               Month = month(fjaell_a_hourly$date_time), 
#                               Day = mday(fjaell_a_hourly$date_time), 
#                               Hour = hour(fjaell_a_hourly$date_time))
# 
# 
# fjaell_a_hourly_to_day <- make_all_day_table(fjaell_a_hourly, timestep = "hour")
# 
# #fjaell_a_hourly_inter <- interpolate_gaps_hourly(fjaell_a_hourly_to_day, latitude = 61.8)
# fjaell_a_hourly_inter <- read.csv("data/fjaell_a_hourly_data_interpolated.csv")
# fjaell_a_hourly_inter$date_time <- as.POSIXct(fjaell_a_hourly_inter$date_time)
# write.csv(fjaell_a_hourly_inter$weather, file = "data/fjaell_a_hourly_data_interpolated.csv", row.names = FALSE)
# #write.csv(fjaell_a_hourly_inter$daily_patch_report, file = "data/fjaell_a_hourly_data_interpolated_quality_check.csv", row.names = FALSE)
# 
# nrow(fjaell_a_hourly_inter[fjaell_a_hourly_inter$Tmin_source == "interpolated",]) + nrow(fjaell_a_hourly_inter[fjaell_a_hourly_inter$Tmax_source == "interpolated",]) 
# nrow(fjaell_a_hourly_inter[fjaell_a_hourly_inter$Tmin_source == "solved",]) + nrow(fjaell_a_hourly_inter[fjaell_a_hourly_inter$Tmax_source == "solved",])
# nrow(fjaell_a_hourly_inter[is.na(fjaell_a_hourly_inter$Tmin_source),]) + nrow(fjaell_a_hourly_inter[is.na(fjaell_a_hourly_inter$Tmax_source),])
# 
# ggplot(fjaell_a_hourly_inter, aes(x = date_time, y = Temp))+
#   geom_line()+
#   ggtitle("Weather station \"Fjäll A\" from Sweden (Latitude = 61.8)")+
#   xlab("timeline")+
#   ylab("air temperature (°C)")+
#   theme_bw(base_size = 14)

Getting temperature data (Chapter 9)

library(chillR)
## Porto: 41.149178,-8.612112; crs=wgs84
station_list <- handle_gsod(action = "list_stations",
                            location = c(y = 41.149178, x = -8.612112),
                            time_interval = c(1973, 2019))
weather <- handle_gsod(action = "download_weather",
                       location = "085450_99999",
                       time_interval = c(1973,2019))
weather_cleaned <- handle_gsod(weather)
weather_cleaned <- weather_cleaned$PORTO$weather
write.csv(weather_cleaned, file = "data/gsod_weather_porto.csv", row.names = FALSE)

Filling gaps in temperature records (Chapter 10)

library(chillR)
library(kableExtra)

## Porto: 41.149178,-8.612112; crs=wgs84
weather <- read.csv("data/gsod_weather_porto.csv")
fixed_weather <- fix_weather(weather)

kable(fixed_weather$QC, caption="Quality control summary produced by *fix_weather()*, with only winter days interpolated") %>%
  kable_styling("striped", position = "left", font_size = 10)
Quality control summary produced by fix_weather(), with only winter days interpolated
Season End_year Season_days Data_days Missing_Tmin Missing_Tmax Incomplete_days Perc_complete
1972/1973 1973 365 365 3 3 3 99.2
1973/1974 1974 365 365 1 0 1 99.7
1974/1975 1975 365 365 2 2 2 99.5
1975/1976 1976 366 366 2 3 3 99.2
1976/1977 1977 365 365 0 0 0 100.0
1977/1978 1978 365 365 0 0 0 100.0
1978/1979 1979 365 365 0 0 0 100.0
1979/1980 1980 366 366 0 0 0 100.0
1980/1981 1981 365 365 0 0 0 100.0
1981/1982 1982 365 365 4 4 4 98.9
1982/1983 1983 365 365 0 0 0 100.0
1983/1984 1984 366 366 0 0 0 100.0
1984/1985 1985 365 365 0 0 0 100.0
1985/1986 1986 365 365 0 0 0 100.0
1986/1987 1987 365 365 0 0 0 100.0
1987/1988 1988 366 366 0 0 0 100.0
1988/1989 1989 365 365 0 0 0 100.0
1989/1990 1990 365 365 2 2 2 99.5
1990/1991 1991 365 365 5 5 5 98.6
1991/1992 1992 366 366 1 1 1 99.7
1992/1993 1993 365 365 0 0 0 100.0
1993/1994 1994 365 365 0 0 0 100.0
1994/1995 1995 365 365 0 0 0 100.0
1995/1996 1996 366 366 0 0 0 100.0
1996/1997 1997 365 365 0 0 0 100.0
1997/1998 1998 365 365 0 0 0 100.0
1998/1999 1999 365 365 1 1 1 99.7
1999/2000 2000 366 366 0 0 0 100.0
2000/2001 2001 365 365 0 0 0 100.0
2001/2002 2002 365 365 0 0 0 100.0
2002/2003 2003 365 365 0 0 0 100.0
2003/2004 2004 366 366 0 0 0 100.0
2004/2005 2005 365 365 0 0 0 100.0
2005/2006 2006 365 365 0 0 0 100.0
2006/2007 2007 365 365 0 0 0 100.0
2007/2008 2008 366 366 0 0 0 100.0
2008/2009 2009 365 365 0 0 0 100.0
2009/2010 2010 365 365 0 0 0 100.0
2010/2011 2011 365 365 0 0 0 100.0
2011/2012 2012 366 366 0 0 0 100.0
2012/2013 2013 365 365 0 0 0 100.0
2013/2014 2014 365 365 0 0 0 100.0
2014/2015 2015 365 365 0 0 0 100.0
2015/2016 2016 366 366 0 0 0 100.0
2016/2017 2017 365 365 0 0 0 100.0
2017/2018 2018 365 365 0 1 1 99.7
2018/2019 2019 365 365 1 1 1 99.7
# Download data for nearby weather stations to substitute for large gaps
station_list <- handle_gsod(action = "list_stations",
                            location = c(y = 41.149178, x = -8.612112),
                            time_interval = c(1973, 2019))

# Select all stations closer than 50 km
selected_stations_chillR_code <- station_list[station_list$distance <= 50 & station_list$Overlap_years > 0, "chillR_code"]

patch_weather <- list()
for(i in 1:length(selected_stations_chillR_code)){
  station_name <- station_list[selected_stations_chillR_code[i]==station_list$chillR_code, "STATION.NAME"]
  print(station_name)
  patch_weather <- append(patch_weather, list(handle_gsod(handle_gsod(action="download_weather",
                                                                      location=selected_stations_chillR_code[i],
                                                                      time_interval=c(1973,2019))[[1]]$weather)))
  if(length(patch_weather) < i){
    patch_weather[[i]] <- "No Data"
  }
  names(patch_weather)[i] <- station_name
}
## [1] "PORTO/SERRA DO PILAR"
## [1] "PORTO"
## [1] "OVAR/MACEDA"
## [1] "OVAR"
# Remove No Data values from list
patch_weather <-patch_weather[unlist(lapply(patch_weather, is.data.frame), use.names = FALSE)]

patched <- patch_daily_temperatures(weather = weather,
                                    patch_weather = patch_weather)
porto_weather <- fix_weather(patched)
write.csv(porto_weather$weather,file = "data/porto_weather.csv", row.names = FALSE)

Generating temperature scenarios (Chapter 11)

# Weather Generator
library(chillR)
library(tidyverse)
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──
## ✓ tibble  3.1.6     ✓ dplyr   1.0.8
## ✓ tidyr   1.2.0     ✓ stringr 1.4.0
## ✓ readr   2.1.2     ✓ forcats 0.5.1
## ✓ purrr   0.3.4
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## x dplyr::between()    masks data.table::between()
## x dplyr::filter()     masks stats::filter()
## x dplyr::first()      masks data.table::first()
## x dplyr::group_rows() masks kableExtra::group_rows()
## x dplyr::lag()        masks stats::lag()
## x dplyr::last()       masks data.table::last()
## x purrr::transpose()  masks data.table::transpose()
library(ggplot2)

porto_weather <- read.csv("data/porto_weather.csv")
porto_weather <- porto_weather[,c("Year","Month","Day","Tmin","Tmax")]

temperature <- temperature_generation(porto_weather,
                                      years = c(1973, 2019),
                                      sim_years = c(2000, 2099))
## Warning: scenario doesn't contain named elements - consider using the following
## element names: 'data', 'reference_year','scenario_type','labels'
## Warning: setting scenario_type to 'relative'
## Warning: Reference year missing - can't check if relative temperature scenario
## is valid
temperature_compare <- cbind(porto_weather[
  which(porto_weather$Year %in% 1973:2019),],
  data_source="observed")

temperature_compare <- rbind(temperature_compare, 
                           cbind(temperature[[1]][,c("Year","Month","Day","Tmin","Tmax")],
                                 data_source="simulated"))

temperature_compare$date <- as.Date(ISOdate(2000, 
                                      temperature_compare$Month,
                                      temperature_compare$Day))

ggplot(data=temperature_compare, aes(date,Tmin)) +
  geom_smooth(aes(colour = factor(Year))) +
  facet_wrap(vars(data_source)) +
  theme_bw(base_size = 20) +
  theme(legend.position = "none") +
  scale_x_date(date_labels = "%b")
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'

ggplot(data=temperature_compare, aes(date,Tmax)) +
  geom_smooth(aes(colour = factor(Year))) +
  facet_wrap(vars(data_source)) +
  theme_bw(base_size = 20) +
  theme(legend.position = "none") +
  scale_x_date(date_labels = "%b")
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'

# Chill distribution
chill_observed <- chilling(stack_hourly_temps(weather = temperature_compare[which(temperature_compare$data_source=="observed"),],
                                              latitude = 41.1),
                           Start_JDay = 305,
                           End_JDay = 59)
chill_simulated <- chilling(stack_hourly_temps(weather = temperature_compare[which(temperature_compare$data_source=="simulated"),],
                                              latitude = 41.1),
                            Start_JDay = 305,
                            End_JDay = 59)

chill_comparison <- cbind(chill_observed, data_source = "observed")
chill_comparison <- rbind(chill_comparison, cbind(chill_simulated, data_source = "simulated"))

chill_comparison_full_season <- chill_comparison[which(chill_comparison$Perc_complete == 100),]

# The accumulated chill portions for the full season 
ggplot(chill_comparison_full_season, aes(x=Chill_portions)) + 
  geom_histogram(binwidth=1,aes(fill = factor(data_source))) +
  theme_bw(base_size = 20) +
  labs(fill = "Data source") +
  xlab("Chill accumulation (Chill Portions)") +
  ylab("Frequency")

# Probability of reaching a specific number of chill portion
chill_simulations<-chill_comparison_full_season[which(chill_comparison_full_season$data_source=="simulated"),]
ggplot(chill_simulations, aes(x=Chill_portions)) +
  stat_ecdf(geom = "step",lwd=1.5,col="blue") +
  ylab("Cumulative probability") +
  xlab("Chill accumulation (in Chill Portions)") +
  theme_bw(base_size = 20)

Historic temperature scenarios (Chapter 13)

library(chillR)
library(ggplot2)

# Load data
porto_weather <- read.csv("data/porto_weather.csv")
porto_weather <- porto_weather[,c("Year","Month","Day","Tmin","Tmax")]

baseline_scenario <- temperature_scenario_from_records(weather = porto_weather,
                                                       year = 1996)
all_past_scenarios <- temperature_scenario_from_records(weather = porto_weather,
                                                        year = c(1975, 1985, 1995, 2005, 2015))
adjusted_scenarios <- temperature_scenario_baseline_adjustment(baseline_temperature_scenario = baseline_scenario,
                                                               temperature_scenario = all_past_scenarios)

all_scenario_temps <- temperature_generation(porto_weather,
                                      years = c(1973, 2019),
                                      sim_years = c(2000,2099),
                                      temperature_scenario = adjusted_scenarios)

chill_hist_scenario_list <- tempResponse_daily_list(all_scenario_temps,
                                                  latitude=41.1,
                                                  Start_JDay = 305,
                                                  End_JDay = 59)
scenarios <- names(chill_hist_scenario_list)[1:5]
all_scenarios<-chill_hist_scenario_list[[scenarios[1]]]
all_scenarios[,"scenario"]<-as.numeric(scenarios[1])

# Loop through the other scenarios
for (sc in scenarios[2:5]){
  all_scenarios<-rbind(all_scenarios,
                       cbind(chill_hist_scenario_list[[sc]],scenario=as.numeric(sc)))
}

all_scenarios<-all_scenarios[which(all_scenarios$Perc_complete==100),]

# Actual chill metrics in this period for comparison
actual_chill<-tempResponse_daily_list(porto_weather, latitude=41.1,
                                      Start_JDay = 305,
                                      End_JDay = 59)[[1]]

actual_chill<-actual_chill[which(actual_chill$Perc_complete==100),]

# There doesn't seem to be a clear trend in Chill Portions over the years
ggplot(data = all_scenarios, aes(scenario, Chill_Portions, fill = factor(scenario)))+
  geom_violin()+ 
  ylab("Chill accumulation (Chill Portions)") +
  xlab("Scenario year") +
  theme_bw(base_size=15)+ 
  geom_point(data=actual_chill,
             aes(End_year,Chill_Portions,fill="blue"),
             col="blue",show.legend = FALSE)+
  scale_fill_discrete(name="Scenario", breaks = unique(all_scenarios$scenario))

Future temperature scenarios (Chapter 14)

library(chillR)

# Loading downloaded data set
porto_weather <- read.csv("data/porto_weather.csv")

# Get climate scenario data
RCPs <- c("rcp45", "rcp85")
Times <- c(2050, 2085)


for (RCP in RCPs)
  for (Time in Times)
  {
    start_year <- Time - 15
    end_year <- Time + 15
    clim_scen <- getClimateWizardData(
      c(longitude = -8.612112, latitude = 41.149178),
      RCP,
      start_year,
      end_year,
      temperature_generation_scenarios = TRUE,
      baseline = c(1975, 2005),
      metric = "monthly_min_max_temps",
      GCMs = "all"
    )
    save_temperature_scenarios(clim_scen,
                               "data/ClimateWizard",
                               paste0("porto_futures_", Time, "_", RCP))
  }

# Baseline adjustment
scenario_1990 <- temperature_scenario_from_records(porto_weather, 1990)
scenario_1996 <- temperature_scenario_from_records(porto_weather, 1996)
adjustment_scenario <- temperature_scenario_baseline_adjustment(scenario_1996,
                                           scenario_1990)
#adjustment_scenario

RCPs<-c("rcp45","rcp85")
Times<-c(2050,2085)
for(RCP in RCPs)
  for(Time in Times)
  {
    clim_scen<-load_ClimateWizard_scenarios(
      "data/ClimateWizard",
      paste0("porto_futures_",Time,"_",RCP))
    clim_scen_adjusted<-
      temperature_scenario_baseline_adjustment(
        baseline_temperature_scenario=adjustment_scenario,
        temperature_scenario=clim_scen)
    Temps<-temperature_generation(
      weather=porto_weather, 
      years = c(1973, 2019),
      sim_years=c(2001, 2101),
      temperature_scenario = clim_scen_adjusted)
    
    save_temperature_scenarios(
      Temps,
      "data/Weather",
      paste0("porto_",Time,"_",RCP))
  }

# Adding historic scenarios
all_past_scenarios<-temperature_scenario_from_records(
  weather=porto_weather,
  year=c(1980,1990,2000,2010))

adjusted_scenarios<-temperature_scenario_baseline_adjustment(
  baseline_temperature_scenario = scenario_1996,
  temperature_scenario = all_past_scenarios)

all_past_scenario_temps <- temperature_generation(
  weather = porto_weather,
  years = c(1973, 2019),
  sim_years = c(2001, 2101),
  temperature_scenario = adjusted_scenarios
)

save_temperature_scenarios(all_past_scenario_temps,
                           "data/Weather",
                           "porto_historic")

# Add our own and some existing models
frost_model <- function(x)
  step_model(x,
             data.frame(
               lower = c(-1000, 0),
               upper = c(0, 1000),
               weight = c(1, 0)
             ))

models <- list(Chill_CP = Dynamic_Model,
               Heat_GDH = GDH,
               Frost_H = frost_model)

# Calculate chill for observed and historic scenarios
Temps <- load_temperature_scenarios("data/Weather", "porto_historic")
chill_past_scenarios <- tempResponse_daily_list(
  Temps,
  latitude = 41.149178,
  Start_JDay = 305,
  End_JDay = 59,
  models = models,
  misstolerance = 10
)
chill_observed <- tempResponse_daily_list(
  porto_weather,
  latitude = 41.149178,
  Start_JDay = 305,
  End_JDay = 59,
  models = models,
  misstolerance = 10
)

save_temperature_scenarios(chill_past_scenarios,
                           "data/chill",
                           "porto_historic")
save_temperature_scenarios(chill_observed,
                           "data/chill",
                           "porto_observed")

# Plot chill scenarios
chill_past_scenarios<-load_temperature_scenarios(
  "data/chill",
  "porto_historic")
chill_observed<-load_temperature_scenarios(
  "data/chill",
  "porto_observed")

chills <- make_climate_scenario(
  chill_past_scenarios,
  caption = "Historic",
  historic_data = chill_observed,
  time_series = TRUE
)

plot_climate_scenarios(
  climate_scenario_list=chills,
  metric="Chill_CP",
  metric_label="Chill (Chill Portions)")

## [[1]]
## [1] "time series labels"
# Apply same procedure on future data
for(RCP in RCPs)
  for(Time in Times)
  {
    Temps<-load_temperature_scenarios(
      "data/Weather",
      paste0("porto_",Time,"_",RCP))
    chill<-tempResponse_daily_list(
      Temps,
      latitude=41.149178,
      Start_JDay = 305,
      End_JDay = 59,
      models=models,
      misstolerance = 10)
    save_temperature_scenarios(
      chill,
      "data/chill",
      paste0("porto_",Time,"_",RCP))
  }

# Name each scenario

for(RCP in RCPs)
  for(Time in Times)
  {
    chill<-load_temperature_scenarios(
      "data/chill",
      paste0("porto_",Time,"_",RCP))
    if(RCP=="rcp45") RCPcaption <- "RCP4.5"
    if(RCP=="rcp85") RCPcaption <- "RCP8.5"
    if(Time=="2050") Time_caption <- "2050"
    if(Time=="2085") Time_caption <- "2085"
    chills <-make_climate_scenario(
      chill,
      caption =c(RCPcaption, Time_caption),
      add_to = chills)
  }

# Plotting each chill model
# Chilling Portions

plot_climate_scenarios(
  climate_scenario_list=chills,
  metric="Chill_CP",
  metric_label="Chill (Chill Portions)",
  texcex=1.5)

## [[1]]
## [1] "time series labels"
## 
## [[2]]
##    code         Label
## 1     1    bcc-csm1-1
## 2     2       BNU-ESM
## 3     3       CanESM2
## 4     4     CESM1-BGC
## 5     5     MIROC-ESM
## 6     6      CNRM-CM5
## 7     7     ACCESS1-0
## 8     8 CSIRO-Mk3-6-0
## 9     9      GFDL-CM3
## 10   10    GFDL-ESM2G
## 11   11    GFDL-ESM2M
## 12   12        inmcm4
## 13   13  IPSL-CM5A-LR
## 14   14  IPSL-CM5A-MR
## 15   15         CCSM4
## 
## [[3]]
##    code         Label
## 1     1    bcc-csm1-1
## 2     2       BNU-ESM
## 3     3       CanESM2
## 4     4     CESM1-BGC
## 5     5     MIROC-ESM
## 6     6      CNRM-CM5
## 7     7     ACCESS1-0
## 8     8 CSIRO-Mk3-6-0
## 9     9      GFDL-CM3
## 10   10    GFDL-ESM2G
## 11   11    GFDL-ESM2M
## 12   12        inmcm4
## 13   13  IPSL-CM5A-LR
## 14   14  IPSL-CM5A-MR
## 15   15         CCSM4
## 
## [[4]]
##    code         Label
## 1     1    bcc-csm1-1
## 2     2       BNU-ESM
## 3     3       CanESM2
## 4     4     CESM1-BGC
## 5     5     MIROC-ESM
## 6     6      CNRM-CM5
## 7     7     ACCESS1-0
## 8     8 CSIRO-Mk3-6-0
## 9     9      GFDL-CM3
## 10   10    GFDL-ESM2G
## 11   11    GFDL-ESM2M
## 12   12        inmcm4
## 13   13  IPSL-CM5A-LR
## 14   14  IPSL-CM5A-MR
## 15   15         CCSM4
## 
## [[5]]
##    code         Label
## 1     1    bcc-csm1-1
## 2     2       BNU-ESM
## 3     3       CanESM2
## 4     4     CESM1-BGC
## 5     5     MIROC-ESM
## 6     6      CNRM-CM5
## 7     7     ACCESS1-0
## 8     8 CSIRO-Mk3-6-0
## 9     9      GFDL-CM3
## 10   10    GFDL-ESM2G
## 11   11    GFDL-ESM2M
## 12   12        inmcm4
## 13   13  IPSL-CM5A-LR
## 14   14  IPSL-CM5A-MR
## 15   15         CCSM4
# Growing Degree Hours

plot_climate_scenarios(
  climate_scenario_list=chills,
  metric="Heat_GDH",
  metric_label="Heat (Growing Degree Hours)",
  texcex=1.5)

## (time series and GCM label output identical to the previous plot call omitted)
# Frost hours

plot_climate_scenarios(
  climate_scenario_list=chills,
  metric="Frost_H",
  metric_label="Frost hours",
  texcex=1.5)

## (time series and GCM label output identical to the previous plot call omitted)

Plotting future scenarios (Chapter 15)

Exercises on plotting future scenarios

  1. Produce similar plots for the weather station you selected for earlier exercises.

Load chill scenarios and annotate them.

chill_past_scenarios <- load_temperature_scenarios("data/chill",
                                                   "porto_historic")
chill_observed <- load_temperature_scenarios("data/chill",
                                             "porto_observed")

chills <- make_climate_scenario(
   chill_past_scenarios,
   caption = "Historic",
   historic_data = chill_observed,
   time_series = TRUE
)

RCPs <- c("rcp45", "rcp85")
Times <- c(2050, 2085)

for (RCP in RCPs)
   for (Time in Times)
   {
      chill <- load_temperature_scenarios("data/chill",
                                          paste0("porto_", Time, "_", RCP))
      if (RCP == "rcp45")
         RCPcaption <- "RCP4.5"
      if (RCP == "rcp85")
         RCPcaption <- "RCP8.5"
      if (Time == "2050")
         Time_caption <- "2050"
      if (Time == "2085")
         Time_caption <- "2085"
      chills <- make_climate_scenario(chill,
                                      caption = c(RCPcaption, Time_caption),
                                      add_to = chills)
   }

Reformat data into a ggplot suitable form.

# Iterate through each chill Scenario and name it.
for(nam in names(chills[[1]]$data))
{
   ch <- chills[[1]]$data[[nam]]
   ch[, "GCM"] <- "none"
   ch[, "RCP"] <- "none"
   ch[, "Year"] <- as.numeric(nam)
   if (nam == names(chills[[1]]$data)[1])
      past_simulated <- ch
   else
      past_simulated <- rbind(past_simulated, ch)
}

# Add value 'Historic' for the new 'Scenario' column
past_simulated["Scenario"] <- "Historic"

# Rename the historic data for convenience
past_observed <- chills[[1]][["historic_data"]]

# Same for future data
for (i in 2:length(chills))
   for (nam in names(chills[[i]]$data))
   {
      ch <- chills[[i]]$data[[nam]]
      ch[, "GCM"] <- nam
      ch[, "RCP"] <- chills[[i]]$caption[1]
      ch[, "Year"] <- chills[[i]]$caption[2]
      if (i == 2 & nam == names(chills[[i]]$data)[1])
         future_data <- ch
      else
         future_data <- rbind(future_data, ch)
   }
source(file = "treephenology_functions.R")
plot_scenarios_gg(past_observed=past_observed,
                  past_simulated=past_simulated,
                  future_data=future_data,
                  metric="Heat_GDH",
                  axis_label="Heat (in Growing Degree Hours)")

plot_scenarios_gg(past_observed=past_observed,
                  past_simulated=past_simulated,
                  future_data=future_data,
                  metric="Chill_CP",
                  axis_label="Chill (in Chill Portions)")

plot_scenarios_gg(past_observed=past_observed,
                  past_simulated=past_simulated,
                  future_data=future_data,
                  metric="Frost_H",
                  axis_label="Frost duration (in hours)")

Chill model comparison (Chapter 16)

Exercises on chill model comparison

  1. Perform a similar analysis for the location you’ve chosen for your exercises.
# Model Collection we are using
hourly_models <- list(
   Chilling_units = chilling_units,
   Low_chill = low_chill_model,
   Modified_Utah = modified_utah_model,
   North_Carolina = north_carolina_model,
   Positive_Utah = positive_utah_model,
   Chilling_Hours = Chilling_Hours,
   Utah_Chill_Units = Utah_Model,
   Chill_Portions = Dynamic_Model
)
daily_models <- list(
   Rate_of_Chill = rate_of_chill,
   Chill_Days = chill_days,
   Exponential_Chill = exponential_chill,
   Triangular_Chill_Hanninen = triangular_chill_1,
   Triangular_Chill_Legave = triangular_chill_2
)

metrics <- c(names(daily_models), names(hourly_models))

model_labels = c(
   "Rate of Chill",
   "Chill Days",
   "Exponential Chill",
   "Triangular Chill (Hänninen)",
   "Triangular Chill (Legave)",
   "Chilling Units",
   "Low-Chill Chill Units",
   "Modified Utah Chill Units",
   "North Carolina Chill Units",
   "Positive Utah Chill Units",
   "Chilling Hours",
   "Utah Chill Units",
   "Chill Portions"
)

data.frame(Metric = model_labels, 'Function name' = metrics)
##                         Metric             Function.name
## 1                Rate of Chill             Rate_of_Chill
## 2                   Chill Days                Chill_Days
## 3            Exponential Chill         Exponential_Chill
## 4  Triangular Chill (Hänninen) Triangular_Chill_Hanninen
## 5    Triangular Chill (Legave)   Triangular_Chill_Legave
## 6               Chilling Units            Chilling_units
## 7        Low-Chill Chill Units                 Low_chill
## 8    Modified Utah Chill Units             Modified_Utah
## 9   North Carolina Chill Units            North_Carolina
## 10   Positive Utah Chill Units             Positive_Utah
## 11              Chilling Hours            Chilling_Hours
## 12            Utah Chill Units          Utah_Chill_Units
## 13              Chill Portions            Chill_Portions
# Apply all Models to historic weather data
porto_temps <- read_tab("data/porto_weather.csv")
Temps <-
   load_temperature_scenarios("data/Weather", "porto_historic")

Start_JDay <- 305
End_JDay <- 59

# apply daily models to past scenarios

daily_models_past_scenarios <- tempResponse_list_daily(Temps,
                                                       Start_JDay = Start_JDay,
                                                       End_JDay = End_JDay,
                                                       models = daily_models)
daily_models_past_scenarios <- lapply(daily_models_past_scenarios,
                                      function(x)
                                         x[which(x$Perc_complete > 90),])

# apply hourly models to past scenarios

hourly_models_past_scenarios <- tempResponse_daily_list(
   Temps,
   latitude = 41.149178,
   Start_JDay = Start_JDay,
   End_JDay = End_JDay,
   models = hourly_models,
   misstolerance = 10
)

past_scenarios <- daily_models_past_scenarios
past_scenarios <- lapply(names(past_scenarios),
                         function(x)
                            cbind(past_scenarios[[x]],
                                  hourly_models_past_scenarios[[x]][, names(hourly_models)]))
names(past_scenarios) <- names(daily_models_past_scenarios)

# apply daily models to past observations

daily_models_observed <- tempResponse_daily(
   porto_temps,
   Start_JDay = Start_JDay,
   End_JDay = End_JDay,
   models = daily_models
)
daily_models_observed <-
   daily_models_observed[which(daily_models_observed$Perc_complete > 90),]

# apply hourly models to past observations

hourly_models_observed <- tempResponse_daily_list(
   porto_temps,
   latitude = 41.149178,
   Start_JDay = Start_JDay,
   End_JDay = End_JDay,
   models = hourly_models,
   misstolerance = 10
)

past_observed <- cbind(daily_models_observed,
                       hourly_models_observed[[1]][, names(hourly_models)])

# save all the results

save_temperature_scenarios(past_scenarios,
                           "data/chill",
                           "porto_multichill_historic")
write.csv(past_observed,
          "data/chill/porto_multichill_observed.csv",
          row.names = FALSE)

# Future
RCPs <- c("rcp45", "rcp85")
Times <- c(2050, 2085)

for (RCP in RCPs)
   for (Time in Times)
   {
      Temps <- load_temperature_scenarios("data/Weather",
                                          paste0("porto_", Time, "_", RCP))
      
      daily_models_future_scenarios <-
         tempResponse_list_daily(Temps,
                                 Start_JDay = Start_JDay,
                                 End_JDay = End_JDay,
                                 models = daily_models)
      daily_models_future_scenarios <-
         lapply(daily_models_future_scenarios,
                function(x)
                   x[which(x$Perc_complete > 90),])
      hourly_models_future_scenarios <-
         tempResponse_daily_list(
            Temps,
            latitude = 41.149178,
            Start_JDay = Start_JDay,
            End_JDay = End_JDay,
            models = hourly_models,
            misstolerance = 10
         )
      
      future_scenarios <- daily_models_future_scenarios
      future_scenarios <- lapply(names(future_scenarios),
                                 function(x)
                                    cbind(future_scenarios[[x]],
                                          hourly_models_future_scenarios[[x]][, names(hourly_models)]))
      names(future_scenarios) <-
         names(daily_models_future_scenarios)
      
      chill <- future_scenarios
      save_temperature_scenarios(chill,
                                 "data/chill",
                                 paste0("porto_multichill_", Time, "_", RCP))
   }
  2. Make a heat map illustrating past and future changes in Safe Winter Chill, relative to a past scenario, for the 13 chill models used here.
# Load multichill data and annotate it
chill_past_scenarios <- load_temperature_scenarios("data/chill",
                                                   "porto_multichill_historic")
chill_observed <- read_tab("data/chill/porto_multichill_observed.csv")

chills <- make_climate_scenario(
   chill_past_scenarios,
   caption = "Historic",
   historic_data = chill_observed,
   time_series = TRUE
)
RCPs <- c("rcp45", "rcp85")
Times <- c(2050, 2085)

for (RCP in RCPs)
   for (Time in Times)
   {
      chill <- load_temperature_scenarios("data/chill",
                                          paste0("porto_multichill_", Time, "_", RCP))
      if (RCP == "rcp45")
         RCPcaption <- "RCP4.5"
      if (RCP == "rcp85")
         RCPcaption <- "RCP8.5"
      if (Time == "2050")
         Time_caption <- "2050"
      if (Time == "2085")
         Time_caption <- "2085"
      chills <- make_climate_scenario(chill,
                                      caption = c(RCPcaption, Time_caption),
                                      add_to = chills)
   }

# Compute safe winter chill (SWC)

for (i in 1:length(chills))
{
   ch <- chills[[i]]
   if (ch$caption[1] == "Historic")
   {
      GCMs <- rep("none", length(names(ch$data)))
      RCPs <- rep("none", length(names(ch$data)))
      Years <- as.numeric(ch$labels)
      Scenario <- rep("Historic", length(names(ch$data)))
   } else
   {
      GCMs <- names(ch$data)
      RCPs <- rep(ch$caption[1], length(names(ch$data)))
      Years <- rep(as.numeric(ch$caption[2]), length(names(ch$data)))
      Scenario <- rep("Future", length(names(ch$data)))
   }
   for (nam in names(ch$data))
   {
      for (met in metrics)
      {
         temp_res <- data.frame(
            Metric = met,
            GCM = GCMs[which(nam == names(ch$data))],
            RCP = RCPs[which(nam == names(ch$data))],
            Year = Years[which(nam == names(ch$data))],
            Result = quantile(ch$data[[nam]][, met], 0.1),
            Scenario = Scenario[which(nam == names(ch$data))]
         )
         if (i == 1 & nam == names(ch$data)[1] & met == metrics[1])
            results <- temp_res
         else
            results <- rbind(results, temp_res)
         
      }
   }
}

for (met in metrics)
   results[which(results$Metric == met), "SWC"] <-
   results[which(results$Metric == met), "Result"] /
   results[which(results$Metric == met &
                    results$Year == 1980), "Result"] - 1

# Plotting future data (matlab.like() comes from the colorRamps package)
library(colorRamps)

rng <- range(results$SWC)

p_future <- ggplot(results[which(!results$GCM == "none"), ],
                   aes(GCM, y = factor(Metric, levels = metrics),
                       fill = SWC)) +
   geom_tile() +
   facet_grid(RCP ~ Year) +
   theme_bw(base_size = 11) +
   theme(axis.text = element_text(size = 8)) +
   scale_fill_gradientn(colours = matlab.like(15),
                        labels = scales::percent,
                        limits = rng) +
   theme(axis.text.x = element_text(
      angle = 75,
      hjust = 1,
      vjust = 1)) +
   labs(fill = "Change in\nSafe Winter Chill\nsince 1980") +
   scale_y_discrete(labels = model_labels) +
   ylab("Chill metric")


# Plotting historic data

p_past <-
   ggplot(results[which(results$GCM == "none"), ],
          aes(Year, y = factor(Metric, levels = metrics),
              fill = SWC)) +
   geom_tile() +
   theme_bw(base_size = 11) +
   theme(axis.text = element_text(size = 8)) +
   scale_fill_gradientn(colours = matlab.like(15),
                        labels = scales::percent,
                        limits = rng) +
   scale_x_continuous(position = "top") +
   labs(fill = "Change in\nSafe Winter Chill\nsince 1980") +
   scale_y_discrete(labels = model_labels) +
   ylab("Chill metric")


# Combine both plots (layout composition via the patchwork package)
library(patchwork)
chill_comp_plot <-
   (p_past +
       p_future +
       plot_layout(
          guides = "collect",
          nrow = 2,
          heights = c(1, 2)
       )) &
   theme(
      legend.position = "right",
      strip.background = element_blank(),
      strip.text = element_text(face = "bold")
   )

chill_comp_plot

  3. Produce an animated line plot of your results (summarizing Safe Winter Chill across all the GCMs).
hist_results<-results[which(results$GCM=="none"),]
hist_results$RCP<-"RCP4.5"
hist_results_2<-hist_results
hist_results_2$RCP<-"RCP8.5"
hist_results<-rbind(hist_results,hist_results_2)

future_results<-results[which(!results$GCM=="none"),]

GCM_aggregate<-aggregate(
  future_results$SWC,
  by=list(future_results$Metric,future_results$RCP,future_results$Year),
  FUN=mean)
colnames(GCM_aggregate)<-c("Metric","RCP","Year","SWC")

RCP_Time_series<-rbind(hist_results[,c("Metric","RCP","Year","SWC")],
                       GCM_aggregate)

# Now we make a static plot of chill development over time according to all the 
#   chill models.

chill_change_plot<-
  ggplot(data=RCP_Time_series,
         aes(x=Year,y=SWC,col=factor(Metric,levels=metrics))) +
  geom_line(lwd=1.3) +
  facet_wrap(~RCP,nrow=2) +
  theme_bw(base_size=15) +
  labs(col = "Change in\nSafe Winter Chill\nsince 1980") +
  scale_color_discrete(labels=model_labels) +
  scale_y_continuous(labels = scales::percent) +
  theme(strip.background = element_blank(),
        strip.text = element_text(face = "bold")) +
  ylab("Safe Winter Chill")


# Animate the plot (requires the gganimate and gifski packages)
library(gganimate)

animation <- chill_change_plot + transition_reveal(Year)
animate(animation, renderer = gifski_renderer())

# Saving the gif
anim_save("chill_comparison_animation.gif", path = "data", animation = last_animation())

Simple phenology analysis (Chapter 17)

Exercises on simple phenology analysis

  1. Provide a brief narrative describing what p-hacking is, and why this is a problematic approach to data analysis.
    • P-hacking is the practice of tweaking a model or analysis until it yields the desired statistical parameters (overfitting), while neglecting the original goal of explaining reality as simply and accurately as possible. One should always keep the research question in mind and not analyze data merely for the sake of an analysis.
  2. Provide a sketch of your causal understanding of the relationship between temperature and bloom dates.
    • When looking at phenology, it is easy to see a correlation between time and phenology: blooming always happens in spring, doesn’t it?! Well yes, but actually no - how else could we make fruit trees bloom in winter in a heated greenhouse? The actual driver of bloom is better explained by temperature. The mix-up arises because outdoor temperatures do follow the time of year, but they are also shaped by other factors, such as climatic processes, which vary considerably (for various reasons). To understand phenology better, we should therefore look at variables that influence it as directly as possible. Time is nice, but temperature is better, and temperature data is widely available! The resolution of the data also matters: monthly versus daily (or hourly) temperatures can make a huge difference, because the two temperature-driven processes we already know about, chilling and forcing, respond to different temperature ranges.
  3. What do we need to know to build a process-based model from this?
    • We need phenology data and daily or even hourly temperature data. We also need some programming skills.
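The p-hacking problem from the first answer can be made tangible with a small base-R simulation (illustration only, not part of the course code): if we test enough unrelated predictors against the same response, some of them will appear “significant” purely by chance.

```r
# Illustration only: repeatedly testing random, unrelated predictors
# against the same response yields "significant" p-values by chance.
set.seed(42)

response <- rnorm(30)      # response containing no real signal
n_predictors <- 100        # number of unrelated predictors to try

p_values <- sapply(1:n_predictors, function(i) {
  predictor <- rnorm(30)   # predictor unrelated to the response
  summary(lm(response ~ predictor))$coefficients[2, 4]
})

# Roughly 5% of the tests come out "significant" at alpha = 0.05,
# even though no predictor has any true relationship with the response.
sum(p_values < 0.05)
```

With 100 unrelated predictors, around five will typically pass the 0.05 threshold - which is exactly why significance found by searching should not be trusted.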

PLS analysis of ‘Alexander Lucas’ pears in Klein-Altendorf (Chapter 18)

Exercises on PLS regression

  1. Briefly explain why you shouldn’t take the results of a PLS regression analysis between temperature and phenology at face value. What do you need in addition in order to make sense of such outputs?
    • PLS was designed for (and is usually applied to) detecting patterns in very large datasets, so it should usually not be applied to datasets as small as most phenology records without caution. Only with a clear picture of the underlying processes in mind can it empirically support a hypothesis grounded in our ecological understanding - not the other way around. The difference lies in having concrete expectations and explanations before the analysis. Don’t let yourself get tainted by running the analysis before your literature research!
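This caution can be illustrated with a base-R sketch (hypothetical numbers, unrelated to the actual pear data): in a small dataset, a regression with many temperature-like predictors fits pure noise almost perfectly, so an impressive-looking fit alone proves nothing.

```r
# Illustration only: with many predictors and few observations,
# a model can fit pure noise almost perfectly.
set.seed(1)

n_obs <- 15
bloom <- rnorm(n_obs)   # fake "bloom dates", pure noise
temps <- as.data.frame(matrix(rnorm(n_obs * 12), ncol = 12))  # 12 fake "monthly temperatures"
dat   <- cbind(bloom = bloom, temps)

fit <- summary(lm(bloom ~ ., data = dat))

# R-squared will typically be very high even though the predictors are
# random noise; the same danger applies when reading too much into
# coefficients derived from a few decades of phenology records.
fit$r.squared
```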
  2. Replicate the PLS analysis for the Roter Boskoop dataset that you used in a previous lesson.
# Loading bloom data for Roter Boskoop
rot_bskp <- read_tab("data/Roter_Boskoop_bloom_1958_2019.csv")
rot_bskp_first <- rot_bskp[, 1:2]
rot_bskp_first[, "Year"] <- substr(rot_bskp_first$First_bloom, 1, 4)
rot_bskp_first[, "Month"] <- substr(rot_bskp_first$First_bloom, 5, 6)
rot_bskp_first[, "Day"] <- substr(rot_bskp_first$First_bloom, 7, 8)
rot_bskp_first <- make_JDay(rot_bskp_first)
rot_bskp_first <- rot_bskp_first[, c("Pheno_year", "JDay")]
colnames(rot_bskp_first) <- c("Year", "pheno")

# Loading Klein Altendorf weather data
KA_temps <- read_tab("data/TMaxTMin1958-2019_patched.csv")
KA_temps <- make_JDay(KA_temps)
PLS_results <- PLS_pheno(KA_temps, rot_bskp_first)
source(file = "treephenology_functions.R")
ggplot_PLS(PLS_results)

  3. Write down your thoughts on why we’re not seeing the temperature response pattern we may have expected. What happened to the chill response?
    • Our PLS analysis shows the times at which temperatures affect our outcome, in this case bloom dates, positively or negatively. Variation is needed to reveal in which direction temperatures shift the bloom date. If it is always cold enough to overcome endodormancy (and quickly so), the PLS analysis can’t possibly detect anything, because temperature differences in November and December don’t change the bloom date in this cold region - the chill requirement is simply always met. Heat in spring, on the other hand, is limiting in our temperate climate, so even slight variations are quickly reflected in the bloom date.
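The reasoning above can be mimicked with simulated numbers (base R, purely hypothetical): when the chill requirement is met in every year, winter temperatures carry no signal, while spring heat correlates strongly with bloom dates.

```r
# Illustration only: bloom dates in a climate where chill is never limiting.
set.seed(7)
n_years <- 50

winter_temp <- rnorm(n_years, mean = 0, sd = 1.5)   # always far below a
chill_met   <- winter_temp < 6                      #   hypothetical 6 degC limit
spring_temp <- rnorm(n_years, mean = 10, sd = 1.5)

# Hypothetical bloom model: warm springs advance bloom; missing the chill
# requirement would delay it by 15 days, but that never happens here.
bloom_day <- 120 - 3 * (spring_temp - 10) + ifelse(chill_met, 0, 15) +
   rnorm(n_years)

cor(winter_temp, bloom_day)   # essentially noise - nothing for PLS to find
cor(spring_temp, bloom_day)   # strongly negative - a clear forcing signal
```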

Successes and limitations of PLS regression analysis (Chapter 19)

Exercises on the successes and limitations of PLS regression

  1. Briefly explain in what climatic settings we can expect PLS regression to detect the chilling phase - and in what settings this probably won’t work.
    • Depending on the chill model used, PLS can be expected to detect the chilling phase where a reasonable number of the observed temperatures lie on the warm side, outside the effective chill accumulation range, so that warm conditions actually reduce chill. Locations with many data points on the cold side are not affected in this way, because every year passes through the optimal chill accumulation temperature zone anyway. In a nutshell, sufficiently warm temperatures outside the chill accumulation range are needed to identify the chilling period with a PLS analysis.
  2. How could we overcome this problem?
    • Instead of only looking at the temperature itself, we could treat the two phases (chilling and forcing) separately and correlate chill accumulation and heat accumulation with bloom dates instead of raw temperatures.
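A minimal base-R sketch of this idea (hypothetical numbers, not the chillR implementation): because effective chill accumulation responds non-monotonically to temperature, converting temperatures into an accumulated chill metric before correlating - which is what `PLS_chill_force()` does properly - can expose a response that raw temperatures hide.

```r
# Illustration only: a non-monotonic (here: triangular) chill response means
# raw mean temperatures correlate poorly with bloom dates, while an
# accumulated chill metric captures the relationship.
set.seed(11)
n_years <- 50

mean_winter_temp <- rnorm(n_years, mean = 4, sd = 3)

# Toy chill metric peaking at 4 degC and fading towards both colder and
# warmer winters (loosely mimicking effective chill accumulation)
chill_accum <- pmax(0, 60 - 8 * abs(mean_winter_temp - 4))
bloom_day   <- 150 - 0.4 * chill_accum + rnorm(n_years, sd = 2)

cor(mean_winter_temp, bloom_day)  # weak: the response is non-monotonic
cor(chill_accum, bloom_day)       # strong: the metric reflects the process
```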

PLS regression with agroclimatic metrics (Chapter 20)

Exercises on PLS regression with agroclimatic metrics:

  1. Repeat the PLS_chill_force procedure for the ‘Roter Boskoop’ dataset. Include plots of daily chill and heat accumulation.
  2. Run PLS_chill_force analyses for all three major chill models. Delineate your best estimates of chilling and forcing phases for all of them.
  3. Plot results for all three analyses, including shaded plot areas for the chilling and forcing periods you estimated.
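No worked solutions are given for these exercises. As a starting point, here is an untested sketch of how the `PLS_chill_force()` procedure could be applied to the ‘Roter Boskoop’ data prepared above; it assumes `rot_bskp_first` and `KA_temps` from the Chapter 18 exercise are still in the workspace, uses the approximate Klein-Altendorf latitude of 50.6, and takes argument choices such as `running_mean = 11` and `split_month = 6` from the corresponding chapter, so they may need adjusting.

```r
# Sketch only: PLS_chill_force for 'Roter Boskoop' (assumes rot_bskp_first
# and KA_temps from the Chapter 18 exercise above).
library(chillR)

# Daily chill and heat accumulation computed from hourly temperatures
hourtemps <- stack_hourly_temps(KA_temps, latitude = 50.6)
day_chill <- daily_chill(hourtemps = hourtemps,
                         running_mean = 11,
                         models = list(Chilling_Hours = Chilling_Hours,
                                       Utah_Chill_Units = Utah_Model,
                                       Chill_Portions = Dynamic_Model,
                                       GDH = GDH))

# PLS regression of bloom dates against daily chill and heat accumulation
plscf <- PLS_chill_force(daily_chill_obj = day_chill,
                         bio_data_frame = rot_bskp_first,
                         split_month = 6)

# Plotting could then reuse a helper such as plot_PLS_chill_force() from the
# course's treephenology_functions.R file, shading the delineated chilling
# and forcing periods.
```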

Bibliography

Fadón, E., M. Herrero, and J. Rodrigo. 2015. “Flower Development in Sweet Cherry Framed in the BBCH Scale.” Scientia Horticulturae 192 (August): 141–47. https://doi.org/10.1016/j.scienta.2015.05.027.
Luedeling, Eike, Roeland Kindt, Neil I. Huth, and Konstantin Koenig. 2014. “Agroforestry Systems in a Changing Climate—Challenges in Projecting Future Performance.” Current Opinion in Environmental Sustainability 6 (February): 1–7. https://doi.org/10.1016/j.cosust.2013.07.013.
Luedeling, Eike, Achim Kunz, and Michael M. Blanke. 2013. “Identification of Chilling and Heat Requirements of Cherry Trees—a Statistical Approach.” International Journal of Biometeorology 57 (5): 679–89. https://doi.org/10.1007/s00484-012-0594-y.